Learning Distributed Representations of Relational Data Using Linear Relational Embedding

Author

  • Alberto Paccanaro
Abstract

I shall discuss new methods for solving the problem of generalizing from relational data. I consider a situation in which we have a set of concepts and a set of relations among these concepts, and the data consists of a few instances of these relations that hold among the concepts; the aim is to infer other instances of these relations. My approach is to learn from the data a representation of the objects in terms of the features which are relevant for the set of relations at hand, together with the rules of how these features interact. I then use these distributed representations to infer how objects relate. Linear Relational Embedding (LRE) is a method recently proposed [1, 2] for learning distributed representations for objects and relations. It finds a mapping from the objects into a feature-space by imposing the constraint that relations in this feature-space are modelled by linear operations. Having learned such distributed representations, it becomes possible to learn a probabilistic model which can be used to infer both positive and negative instances of the relations. LRE shows excellent generalization performance: on a classical problem, results are far better than those obtained by any previously published method, and results on other problems confirm this. Moreover, after learning a distributed representation for a set of objects and relations, LRE can easily modify these representations to incorporate new objects and relations. Learning is fast and LRE rarely converges to solutions with poor generalization. Due to its linearity, LRE cannot represent some relations of arity greater than two. I therefore discuss Non-Linear Relational Embedding (NLRE), and show that it can represent relations that LRE cannot. A probabilistic model, which can be used to infer both positive and negative instances of the relations, can also be learned for NLRE. Finally, Hierarchical LRE and Hierarchical NLRE are modifications of the above methods for learning a distributed representation of variable-sized recursive data structures. Results show that these methods are able to extract semantic features from trees and use them to generalize correctly to novel trees.
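The core mechanism described above (and spelled out in the related abstracts below) is that concepts become learned vectors, each relation becomes a learned matrix, and applying a relation to a concept is a matrix-vector multiplication that should land near the vector of the related concept. The following Python sketch illustrates only that core idea under illustrative assumptions: the toy concepts, the single "parent_of" relation, the embedding dimension, and the plain squared-error gradient descent are placeholders, not the authors' exact algorithm, which the abstract says is trained with a probabilistic model.

# Minimal sketch (not the authors' exact algorithm): concepts as learned
# vectors, relations as learned matrices, relation application as a
# matrix-vector product that should approximate the related concept.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: each triple (a, R, b) states that R(a) = b.
concepts = ["alice", "bob", "carol"]
relations = ["parent_of"]
triples = [("alice", "parent_of", "bob"), ("bob", "parent_of", "carol")]

dim = 4
c_idx = {c: i for i, c in enumerate(concepts)}
r_idx = {r: i for i, r in enumerate(relations)}

V = rng.normal(scale=0.1, size=(len(concepts), dim))        # concept vectors
R = rng.normal(scale=0.1, size=(len(relations), dim, dim))  # relation matrices

lr = 0.1
for epoch in range(2000):
    for a, r, b in triples:
        va, Rm, vb = V[c_idx[a]].copy(), R[r_idx[r]].copy(), V[c_idx[b]].copy()
        err = Rm @ va - vb                      # residual of R(a) versus b
        # Gradient descent on the squared distance ||R a - b||^2
        # (the papers instead learn a probabilistic model).
        V[c_idx[a]] -= lr * (Rm.T @ err)
        V[c_idx[b]] += lr * err
        R[r_idx[r]] -= lr * np.outer(err, va)

# Answer "parent_of(alice) = ?" by applying the relation matrix and taking
# the nearest concept vector; should recover "bob" on this toy data.
query = R[r_idx["parent_of"]] @ V[c_idx["alice"]]
print(min(concepts, key=lambda c: np.linalg.norm(V[c_idx[c]] - query)))

As the abstract notes, the full method goes further and learns a probabilistic model over relation instances, so that both positive and negative instances can be inferred rather than just a nearest neighbour.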


Similar articles

Learning Distributed Representations of Concepts Using Linear Relational Embedding

In this paper we introduce Linear Relational Embedding as a means of learning a distributed representation of concepts from data consisting of binary relations between concepts. The key idea is to represent concepts as vectors, binary relations as matrices, and the operation of applying a relation to a concept as a matrix-vector multiplication that produces an approximation to the related conce...

Extracting Distributed Representations of Concepts and Relations from Positive and Negative Propositions

Linear Relational Embedding (LRE) was introduced (Paccanaro and Hinton, 1999) as a means of extracting a distributed representation of concepts from relational data. The original formulation cannot use negative information and cannot properly handle data in which there are multiple correct answers. In this paper we propose an extended formulation of LRE that solves both these problems. We prese...

Learning Multi-Relational Semantics Using Neural-Embedding Models

Real-world entities (e.g., people and places) are often connected via relations, forming multi-relational data. Modeling multi-relational data is important in many research areas, from natural language processing to biological data mining [6]. Prior work on multi-relational learning can be grouped into three categories: (1) statistical relational learning (SRL) [10], such as Markov logic netw...

Learning Distributed Representations by Mapping Concepts and Relations into a Linear Space

Linear Relational Embedding is a method of learning a distributed representation of concepts from data consisting of binary relations between concepts. Concepts are represented as vectors, binary relations as matrices, and the operation of applying a relation to a concept as a matrix-vector multiplication that produces an approximation to the related concept. A representation for concepts and r...

Towards a Unified Framework for Transfer Learning: Exploiting Correlations and Symmetries

Recent work has shown that neural-embedded word representations capture many relational similarities, which can be recovered by means of vector arithmetic in the embedded space. We show that Mikolov et al.'s method of first adding and subtracting word vectors, and then searching for a word similar to the result, is equivalent to searching for a word that maximizes a linear combination of three p...
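The additive analogy procedure mentioned in this snippet can be sketched in a few lines. The tiny vocabulary and randomly initialized embedding matrix below are placeholders (with real trained word vectors the example query would be expected to return "queen"), and row-normalizing the embeddings is what makes searching near b - a + a* equivalent to maximizing a combination of three cosine similarities.

# Sketch of analogy recovery by vector arithmetic, assuming a hypothetical
# vocabulary and a stand-in embedding matrix E in place of trained vectors.
import numpy as np

vocab = ["king", "queen", "man", "woman", "child"]
rng = np.random.default_rng(1)
E = rng.normal(size=(len(vocab), 8))            # random stand-in embeddings
E /= np.linalg.norm(E, axis=1, keepdims=True)   # unit-normalize each row
idx = {w: i for i, w in enumerate(vocab)}

def analogy(a, b, a_star):
    """Answer 'a is to b as a_star is to ?' by vector arithmetic."""
    target = E[idx[b]] - E[idx[a]] + E[idx[a_star]]
    scores = E @ target          # equals cos(w,b) - cos(w,a) + cos(w,a*)
                                 # for unit vectors, up to a constant scale
    scores[[idx[a], idx[b], idx[a_star]]] = -np.inf   # exclude the inputs
    return vocab[int(np.argmax(scores))]

print(analogy("man", "king", "woman"))  # with real embeddings: "queen"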


Publication date: 2002